Accurate Solution of Structured Least Squares Problems via Rank-Revealing Decompositions

Authors

  • Nieves Castro-González
  • Johan Ceballos
  • Froilán M. Dopico
  • Juan M. Molera
Abstract

Least squares problems min_x ‖b − Ax‖_2, where the matrix A ∈ C^{m×n} (m ≥ n) has some particular structure, arise frequently in applications. Polynomial data fitting is a well-known instance of problems that yield highly structured matrices, but many other examples exist. Very often, structured matrices have huge condition numbers κ_2(A) = ‖A‖_2 ‖A†‖_2 (A† is the Moore-Penrose pseudo-inverse of A) and, therefore, standard algorithms fail to compute accurate minimum 2-norm solutions of least squares problems. In this work, we introduce a framework that allows us to compute minimum 2-norm solutions of many classes of structured least squares problems accurately, i.e., with errors ‖x̂_0 − x_0‖_2 / ‖x_0‖_2 = O(u), where u is the unit roundoff, independently of the magnitude of κ_2(A) for most vectors b. The cost of these accurate computations is O(n^2 m) flops, i.e., roughly the same cost as standard algorithms for least squares problems. The approach in this work relies on first computing an accurate rank-revealing decomposition of A, an idea that has been widely used in recent decades to compute, for structured ill-conditioned matrices, singular value decompositions, eigenvalues and eigenvectors in the Hermitian case, and solutions of linear systems with high relative accuracy. In order to prove that accurate solutions are computed, it is necessary to develop a multiplicative perturbation theory of least squares problems. The results presented in this paper are valid for both full-rank and rank-deficient problems and also for underdetermined linear systems (m < n). Among other types of matrices, the new method applies to rectangular Cauchy, Vandermonde, and graded matrices, and detailed numerical tests for Cauchy matrices are presented.
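The key structural idea above — a rank-revealing decomposition A = X D Yᴴ with well-conditioned X and Y, so that all the ill-conditioning is confined to the diagonal factor D, which can be inverted entrywise without error amplification — can be sketched in a few lines. This is an illustrative outline under those assumptions, not the paper's actual algorithm; the factor names `X`, `d`, `Y` and the helper `ls_via_rrd` are hypothetical.

```python
import numpy as np

def ls_via_rrd(X, d, Y, b):
    """Minimum 2-norm least squares solution of min ||b - A x||_2, where
    A = X @ diag(d) @ Y.conj().T is a rank-revealing decomposition:
    X (m x r) and Y (n x r) have full column rank and are well conditioned,
    and the diagonal d carries all the ill-conditioning.
    Uses pinv(A) = pinv(Y^H) @ diag(1/d) @ pinv(X); each pseudo-inverse
    touches only a well-conditioned factor."""
    t = np.linalg.lstsq(X, b, rcond=None)[0]   # t = pinv(X) @ b
    t = t / d                                  # apply diag(d)^{-1} exactly, entrywise
    # minimum-norm solution of Y^H x = t  ->  x = Y @ (Y^H Y)^{-1} t
    return Y @ np.linalg.solve(Y.conj().T @ Y, t)

# Toy example: orthonormal outer factors, a diagonal spanning 16 orders
# of magnitude, so kappa_2(A) ~ 1e16 while X and Y stay perfectly conditioned.
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((5, 3)))
Y, _ = np.linalg.qr(rng.standard_normal((3, 3)))
d = np.array([1.0, 1e-8, 1e-16])
b = rng.standard_normal(5)
x = ls_via_rrd(X, d, Y, b)
```

Note that the hard part in practice, and the actual subject of the paper, is computing such a decomposition *accurately* for a given structured matrix (e.g., Cauchy or Vandermonde); here the factors are simply assumed to be available.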

Similar resources

Tensor Decompositions with Banded Matrix Factors

The computation of the model parameters of a Canonical Polyadic Decomposition (CPD), also known as the parallel factor (PARAFAC), canonical decomposition (CANDECOMP), or CP decomposition, is typically done by resorting to iterative algorithms, e.g., iterative alternating least squares or descent methods. In many practical problems involving tensor decompositions such as signal proces...

Accurate Symmetric Rank Revealing and Eigendecompositions of Symmetric Structured Matrices

We present new O(n3) algorithms that compute eigenvalues and eigenvectors to high relative accuracy in floating point arithmetic for the following types of matrices: symmetric Cauchy, symmetric diagonally scaled Cauchy, symmetric Vandermonde, and symmetric totally nonnegative matrices when they are given as products of nonnegative bidiagonal factors. The algorithms are divided into two stages: ...

Estimating a Few Extreme Singular Values and Vectors for Large-Scale Matrices in Tensor Train Format

We propose new algorithms for singular value decomposition (SVD) of very large-scale matrices based on a low-rank tensor approximation technique called the tensor train (TT) format. The proposed algorithms can compute several dominant singular values and corresponding singular vectors for large-scale structured matrices given in a TT format. The computational complexity of the proposed methods ...

Superlinearly convergent exact penalty projected structured Hessian updating schemes for constrained nonlinear least squares: asymptotic analysis

We present a structured algorithm for solving constrained nonlinear least squares problems, and establish its local two-step Q-superlinear convergence. The approach is based on an adaptive structured scheme due to Mahdavi-Amiri and Bartels of the exact penalty method of Coleman and Conn for nonlinearly constrained optimization problems. The structured adaptation also makes use of the ideas of N...

Randomized Alternating Least Squares for Canonical Tensor Decompositions: Application to A PDE With Random Data

This paper introduces a randomized variation of the alternating least squares (ALS) algorithm for rank reduction of canonical tensor formats. The aim is to address the potential numerical ill-conditioning of least squares matrices at each ALS iteration. The proposed algorithm, dubbed randomized ALS, mitigates large condition numbers via projections onto random tensors, a technique inspired by w...


Journal:
  • SIAM J. Matrix Analysis Applications

Volume 34, Issue

Pages  -

Publication date: 2013